Robust resource allocation optimization in cognitive wireless network integrating information communication and over-the-air computation
Hualiang LUO, Quanzhong LI, Qi ZHANG
Journal of Computer Applications    2024, 44 (4): 1195-1202.   DOI: 10.11772/j.issn.1001-9081.2023050573

To address the power resource limitations of wireless sensors in over-the-air computation networks and the spectrum competition with existing wireless information communication networks, a cognitive wireless network integrating information communication and over-the-air computation was studied, in which the primary network focused on wireless information communication, and the secondary network aimed to support over-the-air computation, with the sensors harvesting energy from signals sent by the base station of the primary network. Considering the constraints on the Mean Square Error (MSE) of over-the-air computation and on the transmit power of each node in the network, and based on the random channel uncertainty, a robust resource optimization problem was formulated with the objective of maximizing the sum rate of wireless information communication users. To solve the robust optimization problem effectively, an Alternating Optimization (AO)-Improved Constrained Stochastic Successive Convex Approximation (ICSSCA) algorithm, called AO-ICSSCA, was proposed, by which the original robust optimization problem was transformed into deterministic optimization sub-problems, and the downlink beamforming vector of the base station in the primary network, the power factors of the sensors, and the fusion beamforming vector of the fusion center in the secondary network were alternately optimized. Simulation results demonstrate that the AO-ICSSCA algorithm achieves superior performance with less computing time than the Constrained Stochastic Successive Convex Approximation (CSSCA) algorithm before improvement.

Recommendation method based on knowledge-awareness and cross-level contrastive learning
Jie GUO, Jiayu LIN, Zuhong LIANG, Xiaobo LUO, Haitao SUN
Journal of Computer Applications    2024, 44 (4): 1121-1127.   DOI: 10.11772/j.issn.1001-9081.2023050613

As a kind of side information, Knowledge Graph (KG) can effectively improve the recommendation quality of recommendation models, but the existing knowledge-aware recommendation methods based on Graph Neural Network (GNN) suffer from unbalanced utilization of node information. To address this problem, a new recommendation method based on Knowledge-awareness and Cross-level Contrastive Learning (KCCL) was proposed. To alleviate the unbalanced utilization of node information caused by sparse interaction data and a noisy knowledge graph, which deviate from the true inter-node dependencies during information aggregation, a contrastive learning paradigm was introduced into the GNN-based knowledge-aware recommendation model. Firstly, the user-item interaction graph and the item knowledge graph were integrated into a heterogeneous graph, and the node representations of users and items were obtained by a GNN based on the graph attention mechanism. Secondly, consistent noise was added to the information propagation aggregation layers for data augmentation to obtain node representations of different levels, and the outermost node representation was compared with the innermost node representation for cross-level contrastive learning. Finally, the supervised recommendation task and the contrastive learning auxiliary task were jointly optimized to obtain the final representation of each node. Experimental results on the DBbook2014 and MovieLens-1m datasets show that, compared to the second best comparison method, KCCL improves Recall@10 by 3.66% and 0.66% respectively, and NDCG@10 by 3.57% and 3.29% respectively, which verifies the effectiveness of KCCL.
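For illustration only, the following PyTorch-style sketch shows one common way to implement the cross-level contrastive objective described above: representations of the same node taken from the innermost and outermost aggregation layers are treated as a positive pair under an InfoNCE loss. The function name, temperature, and noise scale are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def cross_level_infonce(inner_repr: torch.Tensor,
                        outer_repr: torch.Tensor,
                        temperature: float = 0.2) -> torch.Tensor:
    """InfoNCE-style loss between innermost and outermost layer node embeddings.

    inner_repr, outer_repr: [num_nodes, dim] representations of the same nodes
    from different aggregation levels; row i of each tensor is a positive pair,
    all other rows act as negatives.
    """
    z1 = F.normalize(inner_repr, dim=1)
    z2 = F.normalize(outer_repr, dim=1)
    logits = z1 @ z2.t() / temperature          # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Example: perturb one view with mild uniform noise, as the abstract describes.
h = torch.randn(128, 64)
h_noisy = h + 0.1 * torch.rand_like(h)
loss = cross_level_infonce(h, h_noisy)
```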

Efficient clustered routing protocol for intelligent road cone ad-hoc networks based on non-random clustering
Long CHEN, Xuanlin YU, Wen CHEN, Yi YAO, Wenjing ZHU, Ying JIA, Denghong LI, Zhi REN
Journal of Computer Applications    2024, 44 (3): 869-875.   DOI: 10.11772/j.issn.1001-9081.2023040483

Existing multi-hop clustered routing protocols for the Intelligent Road Cone Ad-hoc Network (IRCAN) suffer from redundant network control overhead, and the average number of hops for data packet transmission is not guaranteed to be minimal. To solve these problems, combined with the link characteristics of the network topology, an efficient clustered routing protocol based on non-random retroverted clustering, called Retroverted-Clustering-based Hierarchy Routing (RCHR), was proposed. Firstly, a retroverted clustering mechanism based on central extension and a cluster head selection algorithm based on overhearing, cross-layer sharing, and extension of the adjacency matrix were proposed. Then, the proposed mechanism and algorithm were used to generate clusters with retroverted characteristics around sink nodes in sequence, and to select the optimal cluster heads for sink nodes in different directions without additional conditions. Thus, the networking control overhead and networking time were decreased, and the formed network topology was beneficial for reducing the average number of hops for data packet transmission. Theoretical analysis validates the effectiveness of the proposed protocol. The simulation results show that, compared with the Ring-Based Multi-hop Clustering (RBMC) routing protocol and the MODified Low Energy Adaptive Clustering Hierarchy (MOD-LEACH) protocol, the networking control overhead and the average number of hops for data packet transmission of the proposed protocol are reduced by at least 32.7% and 2.6%, respectively.

Few-shot recognition method of 3D models based on Transformer
Hui WANG, Jianhong LI
Journal of Computer Applications    2023, 43 (6): 1750-1758.   DOI: 10.11772/j.issn.1001-9081.2022060952

Aiming at the classification problem of Three-Dimensional (3D) models, a few-shot recognition method for 3D models based on Transformer was proposed. Firstly, the 3D point cloud models of the support and query samples were fed into the feature extraction module to obtain feature vectors. Then, the attention features of the support samples were calculated in the Transformer module. Finally, a cosine similarity network was used to calculate the relation scores between the query samples and the support samples. On the ModelNet40 dataset, compared with the Dual Long Short-Term Memory (Dual-LSTM) method, the proposed method improves the 5-way 1-shot and 5-way 5-shot recognition accuracy by 34.54 and 21.00 percentage points, respectively. At the same time, the proposed method also obtains high accuracy on the ShapeNet Core dataset. Experimental results show that the proposed method can recognize new categories of 3D models more accurately.
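As a minimal sketch of the final matching step described above (not the paper's network), the snippet below scores query features against class prototypes with cosine similarity in a 5-way setting; feature dimensions and the prototype averaging are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def relation_scores(query_feats: torch.Tensor,
                    support_feats: torch.Tensor,
                    support_labels: torch.Tensor,
                    n_way: int) -> torch.Tensor:
    """Cosine-similarity relation scores between query samples and class prototypes.

    query_feats:    [n_query, dim] features of query point clouds
    support_feats:  [n_support, dim] features of support point clouds
    support_labels: [n_support] class index (0..n_way-1) of each support sample
    Returns [n_query, n_way] scores; argmax gives the predicted class.
    """
    prototypes = torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in range(n_way)])
    q = F.normalize(query_feats, dim=1)
    p = F.normalize(prototypes, dim=1)
    return q @ p.t()

# 5-way 1-shot example with random features standing in for the extractor output.
support = torch.randn(5, 256)
labels = torch.arange(5)
query = torch.randn(10, 256)
pred = relation_scores(query, support, labels, n_way=5).argmax(dim=1)
```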

Survey of multimodal pre-training models
Huiru WANG, Xiuhong LI, Zhe LI, Chunming MA, Zeyu REN, Dan YANG
Journal of Computer Applications    2023, 43 (4): 991-1004.   DOI: 10.11772/j.issn.1001-9081.2022020296

By using complex pre-training objectives and a large number of model parameters, a Pre-Training Model (PTM) can effectively obtain rich knowledge from unlabeled data. However, the development of multimodal PTMs is still in its infancy. According to the differences between modalities, most current multimodal PTMs can be divided into image-text PTMs and video-text PTMs; according to the data fusion method, they can be divided into single-stream models and two-stream models. Firstly, common pre-training tasks and the downstream tasks used in validation experiments were summarized. Secondly, the common models in the area of multimodal pre-training were sorted out, and the downstream tasks of each model as well as the performance and experimental data of the models were listed in tables for comparison. Thirdly, the application scenarios of the M6 (Multi-Modality to Multi-Modality Multitask Mega-transformer) model, the Cross-modal Prompt Tuning (CPT) model, the VideoBERT (Video Bidirectional Encoder Representations from Transformers) model, and the AliceMind (Alibaba's collection of encoder-decoders from Mind) model in specific downstream tasks were introduced. Finally, the challenges and future research directions of multimodal PTM work were summed up.

Mobile robot path planning based on improved SAC algorithm
Yongdi LI, Caihong LI, Yaoyu ZHANG, Guosheng ZHANG
Journal of Computer Applications    2023, 43 (2): 654-660.   DOI: 10.11772/j.issn.1001-9081.2021122053

To solve the problems of long training time and slow convergence when the SAC (Soft Actor-Critic) algorithm is applied to the local path planning of mobile robots, a PER-SAC algorithm was proposed by introducing the Prioritized Experience Replay (PER) technique. Firstly, to improve the convergence speed and stability of the robot training process, a priority strategy was applied to draw samples from the experience pool instead of the traditional random sampling, so that the network prioritized training on samples with larger errors. Then, the calculation of the Temporal-Difference (TD) error was optimized, and the training deviation was reduced. Next, transfer learning was used to train the robot gradually from a simple environment to a complex one in order to improve the training speed. In addition, an improved reward function was designed to increase the intrinsic reward of the robot, thereby alleviating the sparsity of the environmental reward. Finally, simulation was carried out on the ROS (Robot Operating System) platform, and the results show that the PER-SAC algorithm outperforms the original algorithm in terms of convergence speed and length of the planned path in different obstacle environments. Moreover, the PER-SAC algorithm reduces the training time and is significantly better than the original algorithm in path planning performance.
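As a rough, self-contained sketch of the prioritized-sampling idea mentioned above (standard proportional PER, not the paper's exact implementation), the buffer below samples transitions with probability proportional to their TD error and returns importance weights; capacity, alpha and beta values are illustrative.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (simplified, array-based).

    Transitions with larger TD error are sampled more often; importance
    weights correct the induced bias, as in standard PER.
    """
    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-5):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.data, self.priorities, self.pos = [], np.zeros(capacity), 0

    def add(self, transition):
        max_p = self.priorities.max() if self.data else 1.0  # new samples get max priority
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        p = self.priorities[:len(self.data)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-self.beta)  # importance-sampling weights
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors):
        # priorities follow the magnitude of the latest TD errors
        self.priorities[idx] = np.abs(td_errors) + self.eps
```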

Customs risk control method based on improved butterfly feedback neural network
Zhenggang WANG, Zhong LIU, Jin JIN, Wei LIU
Journal of Computer Applications    2023, 43 (12): 3955-3964.   DOI: 10.11772/j.issn.1001-9081.2022121873

Aiming at the low efficiency, low accuracy, and excessive occupation of human resources of current China Customs risk control methods, as well as the requirement of deploying lightweight intelligent classification algorithms, a customs risk control method based on an improved Butterfly Feedback neural Network Version 2 (BFNet-V2) was proposed. Firstly, the Filling in Code (FC) algorithm was used to realize the semantic conversion of customs tabular data into analog images. Then, the analog image data were used to train the BFNet-V2, whose regular structure consists of left and right links, different convolution kernels and blocks, and a small-block design, with residual short paths added to alleviate overfitting and gradient vanishing. Finally, a Historical momentum Adaptive moment estimation algorithm (H-Adam) was proposed to optimize the gradient descent process, achieve better adaptive learning rate adjustment, and classify the customs data. Xception (eXtreme inception), Mobile Network (MobileNet), Residual Network (ResNet), and Butterfly Feedback neural Network (BF-Net) were selected as baseline network structures for comparison. The Receiver Operating Characteristic (ROC) curve and the Precision-Recall (PR) curve of BFNet-V2 enclose those of the baseline network structures. Taking Transfer Learning (TL) as an example, compared with the four baseline network structures, the classification accuracy of BFNet-V2 increases by 4.30%, 4.34%, 4.10% and 0.37%, respectively. In classifying real-label data, the misjudgment rate of BFNet-V2 is reduced by 70.09%, 57.98%, 58.36% and 10.70%, respectively. The proposed method was also compared with eight shallow and deep learning classification methods, and the accuracies on three datasets increase by more than 1.33%. The proposed method can realize automatic classification of tabular data and improve the efficiency and accuracy of customs risk control.

Multi-scale feature enhanced retinal vessel segmentation algorithm based on U-Net
Zhiang ZHANG, Guangzhong LIAO
Journal of Computer Applications    2023, 43 (10): 3275-3281.   DOI: 10.11772/j.issn.1001-9081.2022091437

Aiming at the shortcomings of traditional retinal vessel segmentation algorithms, such as low vessel segmentation accuracy and mis-segmentation of focal areas, a Multi-scale Feature Enhanced retinal vessel segmentation algorithm based on U-Net (MFEU-Net) was proposed. Firstly, to address the vanishing gradient problem, an improved Feature Information Enhancement Residual Module (FIE-RM) was designed to replace the convolution blocks of U-Net. Secondly, to enlarge the receptive field and improve the extraction of vascular features, a multi-scale dense atrous convolution module was introduced at the bottom of U-Net. Finally, to reduce the information loss during encoding and decoding, a multi-scale channel enhancement module was constructed at the skip connections of U-Net. Experimental results on the Digital Retinal Images for Vessel Extraction (DRIVE) and CHASE_DB1 datasets show that, compared with CS-Net (Channel and Spatial attention Network), the second best algorithm in retinal vessel segmentation, MFEU-Net improves the F1 score by 0.35 and 1.55 percentage points respectively, and the Area Under Curve (AUC) by 0.34 and 1.50 percentage points respectively. It is verified that MFEU-Net can effectively improve the accuracy and robustness of retinal vessel segmentation.
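To illustrate how a multi-scale atrous (dilated) convolution module enlarges the receptive field at a U-Net bottleneck, here is a minimal PyTorch sketch; it is a generic parallel-branch design under assumed channel counts and dilation rates, not the paper's exact module.

```python
import torch
import torch.nn as nn

class MultiScaleAtrousBlock(nn.Module):
    """Parallel dilated convolutions with different rates, concatenated and fused.

    Padding equals the dilation rate so every branch keeps the spatial size,
    letting the outputs be concatenated along the channel dimension.
    """
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))
            for r in rates])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Bottleneck feature map of a U-Net-like encoder, e.g. 512 channels at 1/16 scale.
feats = torch.randn(2, 512, 36, 36)
out = MultiScaleAtrousBlock(512, 256)(feats)   # -> [2, 256, 36, 36]
```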

Car-following model of intelligent connected vehicles based on time-delayed velocity difference and velocity limit
Kaiwang ZHANG, Fei HUI, Guoxiang ZHANG, Qi SHI, Zhizhong LIU
Journal of Computer Applications    2022, 42 (9): 2936-2942.   DOI: 10.11772/j.issn.1001-9081.2021081425

Focusing on the disturbed car-following behavior and the instability of traffic flow caused by the uncertainty in the driver's acquisition of road velocity limit and time-delayed information, a car-following model TD-VDVL (Time-Delayed Velocity Difference and Velocity Limit) was proposed that considers the time-delayed velocity difference and velocity limit information in the Internet of Vehicles (IoV) environment. Firstly, the velocity change caused by time delay and the road velocity limit information were introduced to improve the Full Velocity Difference (FVD) model. Then, the linear spectral wave perturbation method was used to derive the traffic flow stability criterion of the TD-VDVL model, and the influence of each model parameter on the stability of the system was analyzed separately. Finally, numerical simulation experiments and comparative analysis were carried out in Matlab. In the simulations, straight and circular roads were selected, and a slight disturbance was imposed on the fleet during driving. Under the same conditions, the TD-VDVL model had the smallest fluctuations of fleet velocity and headway compared with the Optimal Velocity (OV) and FVD models. In particular, when the sensitivity coefficient of the velocity limit information was 0.3 and the sensitivity coefficient of the time-delayed velocity difference was 0.3, the average fluctuation rate of the fleet velocity of the proposed model was 2.35% at 500 s, and the peak-valley difference of fleet headway was only 0.0194 m. Experimental results show that the TD-VDVL model has a larger stability region after introducing the time-delayed velocity difference and velocity limit information, and can significantly enhance the ability of the car-following fleet to absorb disturbances.
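For reference, the standard FVD model is shown below together with one plausible form of the extension described above; the extension terms and their coefficients are an illustrative assumption, not the exact TD-VDVL equation from the paper.

```latex
% Standard Full Velocity Difference (FVD) model:
%   \dot{v}_n(t) = \kappa\,[V(\Delta x_n(t)) - v_n(t)] + \lambda\,\Delta v_n(t)
% A TD-VDVL-style extension could add a time-delayed velocity difference with
% delay \tau and a relaxation toward the broadcast velocity limit v_{\lim}
% (\gamma_1, \gamma_2 play the role of the sensitivity coefficients above):
\dot{v}_n(t) = \kappa\big[V(\Delta x_n(t)) - v_n(t)\big]
             + \lambda\,\Delta v_n(t)
             + \gamma_1\big[\Delta v_n(t-\tau) - \Delta v_n(t)\big]
             + \gamma_2\big[v_{\lim} - v_n(t)\big]
```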

Automatic detection algorithm for attention deficit/hyperactivity disorder based on speech pause and flatness
Guozhong LI, Ya CUI, Yixin EMU, Ling HE, Yuanyuan LI, Xi XIONG
Journal of Computer Applications    2022, 42 (9): 2917-2925.   DOI: 10.11772/j.issn.1001-9081.2021071213

Clinicians diagnose Attention Deficit/Hyperactivity Disorder (ADHD) mainly based on their subjective assessment, which lacks objective criteria for assistance. To solve this problem, an automatic detection algorithm for ADHD based on speech pause and flatness was proposed. Firstly, the Frequency band Difference Energy Entropy Product (FDEEP) parameter was used to automatically locate the voiced segments in the speech and extract the speech pause features. Then, the Transform Average Amplitude Squared Difference (TAASD) parameter was presented to calculate the voice multi-frequency and extract the flatness features. Finally, the fused features and a Support Vector Machine (SVM) classifier were combined to realize the automatic recognition of ADHD. The speech samples in the experiment were collected from 17 normal control children and 37 children with ADHD. Experimental results show that the proposed algorithm can effectively discriminate between normal children and children with ADHD, with an accuracy of 91.38%.

GPU-based method for evaluating algebraic properties of cryptographic S-boxes
Jingwen CAI, Yongzhuang WEI, Zhenghong LIU
Journal of Computer Applications    2022, 42 (9): 2750-2756.   DOI: 10.11772/j.issn.1001-9081.2021081382

Cryptographic S-boxes (substitution boxes) are the nonlinear components in symmetric encryption algorithms, and their algebraic properties usually determine the security performance of these encryption algorithms. Differential uniformity, nonlinearity and revised transparency order are three basic indicators for evaluating the security properties of cryptographic S-boxes, describing the S-box's resistance against differential cryptanalysis, linear cryptanalysis and differential power attack, respectively. When the input size of the cryptographic S-box is large (for example, larger than 15 bits), the time needed to evaluate these indicators on a Central Processing Unit (CPU) is too long, or the evaluation is even impracticable. How to evaluate the algebraic properties of large-size S-boxes quickly is currently a research hotspot in the field. Therefore, a method to quickly evaluate the algebraic properties of cryptographic S-boxes was proposed on the basis of the Graphics Processing Unit (GPU). In this method, the kernel functions were split into multiple threads by a slicing technique, and an optimization scheme was proposed by combining the characteristics of solving differential uniformity, nonlinearity and revised transparency order to realize parallel computing. Experimental results show that, compared with the CPU-based implementation, the single-GPU environment improves the implementation efficiency significantly: the time spent on calculating differential uniformity, nonlinearity, and revised transparency order is reduced by 90.28%, 80%, and 66.67% respectively, which verifies the effectiveness of this method.
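To make the first indicator concrete, the snippet below computes differential uniformity directly from its definition on a small S-box; this is a plain CPU reference in Python, not the GPU-parallel implementation discussed above.

```python
def differential_uniformity(sbox):
    """Differential uniformity of an n-bit S-box given as a list of 2^n integers.

    For every nonzero input difference a, count how often each output difference
    b = S(x ^ a) ^ S(x) occurs; the maximum count over all (a, b) with a != 0
    is the differential uniformity (lower means stronger against differential
    cryptanalysis).
    """
    n = len(sbox)
    worst = 0
    for a in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ a] ^ sbox[x]] += 1
        worst = max(worst, max(counts))
    return worst

# 4-bit S-box of the PRESENT cipher; its differential uniformity is 4.
present_sbox = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(differential_uniformity(present_sbox))
```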

Real-time traffic sign detection algorithm based on improved YOLOv3
Dawei ZHANG, Xuchong LIU, Wei ZHOU, Zhuhui CHEN, Yao YU
Journal of Computer Applications    2022, 42 (7): 2219-2226.   DOI: 10.11772/j.issn.1001-9081.2021050731

Aiming at the slow detection speed and low recognition accuracy of road traffic sign detection in Chinese intelligent driving assistance systems, an improved road traffic sign detection algorithm based on YOLOv3 (You Only Look Once version 3) was proposed. Firstly, MobileNetv2 was introduced into YOLOv3 as the basic feature extraction network to construct the object detection module MN-YOLOv3 (MobileNetv2-YOLOv3), and two down-up links were added to the backbone of MN-YOLOv3 for feature fusion, thereby reducing the model parameters and improving both the running speed of the detection module and the information fusion of the multi-scale feature maps. Then, according to the shape characteristics of traffic sign objects, the K-Means++ algorithm was used to generate the initial anchor cluster centers, and the DIOU (Distance Intersection Over Union) loss function was introduced to combine DIOU with Non-Maximum Suppression (NMS) for bounding box regression. Finally, the Region Of Interest (ROI) and the context information were unified by ROI Align and merged to enhance the object feature expression. Experimental results show that the proposed algorithm performs well, with a mean Average Precision (mAP) of 96.20% on the CSUST (ChangSha University of Science and Technology) Chinese Traffic Sign Detection Benchmark (CCTSDB) dataset. Compared with the Faster R-CNN (Region Convolutional Neural Network), YOLOv3 and Cascaded R-CNN detection algorithms, the proposed algorithm has better real-time performance, higher detection accuracy, and greater robustness to various environmental changes.
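Since DIoU is central to the regression and NMS changes above, here is a short sketch of the standard DIoU formula, IoU minus the normalized squared center distance; it only illustrates the metric, not the paper's training code.

```python
def diou(box_a, box_b):
    """Distance-IoU between two boxes given as (x1, y1, x2, y2).

    DIoU = IoU - rho^2(center_a, center_b) / c^2, where c is the diagonal of
    the smallest enclosing box; 1 - DIoU serves as a regression loss, and DIoU
    can replace IoU inside NMS.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # squared distance between box centers
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    # squared diagonal of the smallest enclosing box
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    return iou - (rho2 / c2 if c2 > 0 else 0.0)

print(diou((0, 0, 2, 2), (1, 1, 3, 3)))
```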

Real root isolation algorithm for exponential function polynomials
Xinyu GE, Shiping CHEN, Zhong LIU
Journal of Computer Applications    2022, 42 (5): 1531-1537.   DOI: 10.11772/j.issn.1001-9081.2021030440

To address the real root isolation problem of transcendental function polynomials, an interval isolation algorithm for exponential function polynomials, named exRoot, was proposed. In the algorithm, the real root isolation problem of non-polynomial real functions was transformed into a sign determination problem for polynomials and then solved. Firstly, the Taylor substitution method was used to construct a polynomial nested interval of the objective function. Then, the problem of finding the roots of the exponential function polynomial was transformed into the problem of determining the sign of polynomials on intervals. Finally, a comprehensive algorithm was given and tentatively applied to determine the reachability of linear systems with rational eigenvalues. The proposed algorithm was implemented efficiently in Maple with readable output results. Different from HSOLVER and the numerical method fsolve, exRoot avoids discussing the existence of roots directly, and theoretically has termination and completeness. It can reach arbitrary precision and, when applied to optimization problems, avoids the systematic error brought by numerical solutions.

Quality judgment of 3D face point cloud based on feature fusion
Gong GAO, Hongyu YANG, Hong LIU
Journal of Computer Applications    2022, 42 (3): 968-973.   DOI: 10.11772/j.issn.1001-9081.2021030414

A Feature Fusion Network (FFN) was proposed to judge the quality of 3D face point clouds acquired by a binocular structured light scanner. Firstly, the 3D point cloud was preprocessed to cut out the face area, and the point cloud together with its corresponding 2D plane projection image was used as the input. Secondly, Dynamic Graph Convolutional Neural Network (DGCNN) and ShuffleNet were trained for point cloud learning. Then, the middle-layer features of the two network modules were extracted and fused to fine-tune the whole network. Finally, three fully connected layers were used to realize the five-class classification of 3D face point clouds (excellent, ordinary, stripe, burr, deformation). The proposed FFN achieved a classification accuracy of 83.7%, which was 5.8% higher than that of ShuffleNet and 2.2% higher than that of DGCNN. The experimental results show that the weighted fusion of 2D image features and point cloud features can achieve a complementary effect between different features.

Online kernel regression based on random sketching method
Qinghua LIU, Shizhong LIAO
Journal of Computer Applications    2022, 42 (3): 676-682.   DOI: 10.11772/j.issn.1001-9081.2021040869

In online kernel regression learning, the inverse of the kernel matrix needs to be computed when a new sample arrives, and the computational complexity is at least quadratic in the number of rounds. The idea of applying the sketching method to hypothesis updating was introduced, and a more efficient online kernel regression algorithm via sketching was proposed. Firstly, with the loss function set as the square loss, a new gradient descent algorithm called FTL-Online Kernel Regression (F-OKR) was proposed, using the Nyström approximation method to approximate the kernel and applying the idea of Follow-The-Leader (FTL). Then, the sketching method was used to accelerate F-OKR so that its computational complexity was reduced to be linear in the number of rounds and the sketch size, and quadratic in the data dimension. Finally, an efficient online kernel regression algorithm called Sketched Online Kernel Regression (SOKR) was designed. Compared to F-OKR, SOKR achieves nearly the same accuracy and reduces the runtime by about 16.7% on some datasets. Sub-linear regret bounds of both algorithms were proved, and experimental results on standard regression datasets also verify that the algorithms outperform the NOGD (Nyström Online Gradient Descent) algorithm, with the average loss over all datasets reduced by about 64%.
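To make the Nyström approximation step concrete, the NumPy sketch below builds a rank-m feature map from m landmark points so that the full kernel matrix is approximated by an outer product of low-dimensional features; the RBF kernel, landmark count, and regularization constant are illustrative choices, not the paper's settings.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, m, gamma=0.5, seed=0):
    """Rank-m Nystrom feature map: the kernel matrix is approximated by Phi @ Phi.T.

    m landmark points define W = K(landmarks, landmarks) and C = K(X, landmarks);
    the features are C @ W^{-1/2}, so regression can run in an m-dimensional
    space instead of on the full kernel matrix.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    landmarks = X[idx]
    C = rbf_kernel(X, landmarks, gamma)
    W = rbf_kernel(landmarks, landmarks, gamma)
    eigval, eigvec = np.linalg.eigh(W)
    eigval = np.maximum(eigval, 1e-12)        # guard against tiny negative eigenvalues
    W_inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
    return C @ W_inv_sqrt

X = np.random.randn(500, 3)
Phi = nystrom_features(X, m=50)
K_approx = Phi @ Phi.T            # approximates the full 500x500 kernel matrix
```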

Multi-attention fusion network for medical image segmentation
Hong LI, Junying ZOU, Xicheng TAN, Guiyang LI
Journal of Computer Applications    2022, 42 (12): 3891-3899.   DOI: 10.11772/j.issn.1001-9081.2021101737

In the field of deep-learning-based medical image segmentation, TransUNet (which merits both Transformers and U-Net) is one of the current advanced segmentation models, but the local connections between adjacent blocks in its encoder are not considered, and inter-channel information does not interact during the upsampling of the decoder. To address these problems, a Multi-attention FUsion Network (MFUNet) model was proposed. Firstly, a Feature Fusion Module (FFM) was introduced in the encoder to enhance the local connections between adjacent blocks in the Transformer and maintain the spatial location relationships of the images. Then, a Double Channel Attention (DCA) module was introduced in the decoder to fuse the channel information of multi-level features, enhancing the sensitivity of the model to key inter-channel information. Finally, the model's constraints on the segmentation results were strengthened by combining the cross-entropy loss and the Dice loss. Experiments on the Synapse and ACDC public datasets show that MFUNet achieves Dice Similarity Coefficients (DSC) of 81.06% and 90.91%, respectively. Compared with the baseline model TransUNet, MFUNet achieves an 11.5% reduction in Hausdorff Distance (HD) on the Synapse dataset, and improves the segmentation accuracy of the right ventricle and the myocardium on the ACDC dataset by 1.43 and 3.48 percentage points, respectively. The experimental results show that MFUNet achieves better segmentation results in both internal filling and edge prediction of medical images, which can help improve the diagnostic efficiency of doctors in clinical practice.
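The combined loss mentioned above is a standard construction; a minimal PyTorch sketch is shown below, with an assumed equal weighting between the cross-entropy and soft Dice terms (the paper's weighting may differ).

```python
import torch
import torch.nn.functional as F

def ce_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                 ce_weight: float = 0.5, smooth: float = 1e-5) -> torch.Tensor:
    """Combined cross-entropy and soft Dice loss for multi-class segmentation.

    logits: [B, C, H, W] raw network outputs; target: [B, H, W] class indices.
    """
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = 1.0 - ((2.0 * intersection + smooth) / (cardinality + smooth)).mean()
    return ce_weight * ce + (1.0 - ce_weight) * dice

logits = torch.randn(2, 9, 64, 64)          # e.g. 9 classes, Synapse-style setup
target = torch.randint(0, 9, (2, 64, 64))
loss = ce_dice_loss(logits, target)
```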

Low density parity check code decoding acceleration technology based on GPU
Qidi XU, Zhenghong LIU, Lin ZHENG
Journal of Computer Applications    2022, 42 (12): 3841-3846.   DOI: 10.11772/j.issn.1001-9081.2021101726

With the development of communication technology, communication terminals gradually adopt software to be compatible with multiple communication modes and protocols. As the traditional software radio architecture, with the Central Processing Unit (CPU) of a computer as the arithmetic unit, cannot satisfy the wideband data throughput of high-speed wireless communication systems such as Multiple-Input Multiple-Output (MIMO) systems, an acceleration method for a Low Density Parity Check (LDPC) code decoder based on the Graphics Processing Unit (GPU) was proposed. Firstly, according to a theoretical analysis of the acceleration performance of GPU-based heterogeneous parallel computing in the GNU Radio 4G/5G physical layer signal processing module, a more parallel-friendly Layered Normalized Min-Sum (LNMS) algorithm was adopted. Then, the decoding delay of the decoder was reduced by a global synchronization strategy, reasonable allocation of GPU memory space and a stream parallelism mechanism, while the LDPC decoding process was parallelized with multi-threading on the GPU. Finally, the GPU-accelerated decoder was implemented and verified on a software radio platform, and the bit error rate performance and acceleration bottlenecks of the parallel decoder were analyzed. Experimental results show that, compared with the traditional CPU serial processing method, the CPU+GPU heterogeneous platform increases the decoding rate for LDPC codes by a factor of about 200, and the throughput of the decoder can reach more than 1 Gb/s; especially for large-scale data, the decoding performance is greatly improved compared with the traditional decoder.

Few-shot target detection based on negative-margin loss
Yunyan DU, Hong LI, Jinhui YANG, Yu JIANG, Yao MAO
Journal of Computer Applications    2022, 42 (11): 3617-3624.   DOI: 10.11772/j.issn.1001-9081.2021091683

Most existing target detection algorithms rely on large-scale annotated datasets to ensure detection accuracy, but for some scenes it is difficult to obtain a large amount of annotated data, and doing so consumes a lot of human and material resources. To resolve this problem, a Few-Shot Target Detection method based on Negative Margin loss (NM-FSTD) was proposed. The negative margin loss, which belongs to metric learning in Few-Shot Learning (FSL), was introduced into target detection; it avoids mistakenly mapping samples of the same novel class to multiple peaks or clusters and thus helps the classification of novel classes in few-shot target detection. Firstly, a large number of training samples and the target detection framework based on negative margin loss were used to train a model with good generalization performance. Then, the model was fine-tuned with a small number of labeled samples of the target categories. Finally, the fine-tuned model was used to detect new samples of the target categories. To verify the detection effect of NM-FSTD, MS COCO was used for training and evaluation. Experimental results show that the AP50 of NM-FSTD reaches 22.8%; compared with Meta R-CNN (Meta Regions with CNN features) and MPSR (Multi-Scale Positive Sample Refinement), the accuracy is improved by 3.7 and 4.9 percentage points, respectively. NM-FSTD can effectively improve the detection performance on target categories in few-shot settings and alleviate the problem of insufficient data in the field of target detection.
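As a rough sketch of the negative-margin idea in the classification head (a generic margin-based cosine softmax, not the paper's detector), the PyTorch module below subtracts a margin from the ground-truth cosine similarity; the scale and margin values are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NegativeMarginCosineLoss(nn.Module):
    """Cosine softmax classification loss with a (possibly negative) margin.

    The margin is subtracted from the cosine similarity of the ground-truth
    class before the scaled softmax; setting margin < 0 relaxes the decision
    boundary, which is the idea behind negative-margin few-shot classifiers.
    """
    def __init__(self, feat_dim: int, num_classes: int,
                 scale: float = 30.0, margin: float = -0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale, self.margin = scale, margin

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        cos = F.normalize(features, dim=1) @ F.normalize(self.weight, dim=1).t()
        margins = torch.zeros_like(cos)
        margins.scatter_(1, labels.unsqueeze(1), self.margin)   # margin only on true class
        return F.cross_entropy(self.scale * (cos - margins), labels)

criterion = NegativeMarginCosineLoss(feat_dim=256, num_classes=20)
loss = criterion(torch.randn(8, 256), torch.randint(0, 20, (8,)))
```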

Cross-modal tensor fusion network based on semantic relation graph for image-text retrieval
Changhong LIU, Sheng ZENG, Bin ZHANG, Yong CHEN
Journal of Computer Applications    2022, 42 (10): 3018-3024.   DOI: 10.11772/j.issn.1001-9081.2021091622

The key to cross-modal image-text retrieval is how to effectively capture the semantic correlation between images and text. Most existing methods learn the global semantic correlation between image region features and text features, or the local semantic correlation between inter-modality objects, while ignoring the correlation between intra-modality object relationships and inter-modality object relationships. To solve this problem, a Cross-Modal Tensor Fusion Network based on Semantic Relation Graph (CMTFN-SRG) for image-text retrieval was proposed. Firstly, the relationships of image regions and of text words were generated by a Graph Convolutional Network (GCN) and a Bidirectional Gated Recurrent Unit (Bi-GRU) respectively. Then, the fine-grained semantic correlation between the data of the two modalities was learned by a tensor fusion network that matches the learned semantic relation graph of image regions with the graph of text words. At the same time, a Gated Recurrent Unit (GRU) was used to learn the global features of the image, and the global features of the image and the text were matched to capture the inter-modality global semantic correlation. The proposed method was compared with the Multi-Modality Cross Attention (MMCA) method on the benchmark datasets Flickr30K and MS-COCO. Experimental results show that the proposed method improves the Recall@1 of the text-to-image retrieval task by 2.6%, 9.0% and 4.1% on the Flickr30K, MS-COCO1K and MS-COCO5K test datasets respectively, and the mean Recall (mR) by 0.4, 1.3 and 0.1 percentage points respectively. It can be seen that the proposed method can effectively improve the precision of image-text retrieval.

Survey of event extraction
Chunming MA, Xiuhong LI, Zhe LI, Huiru WANG, Dan YANG
Journal of Computer Applications    2022, 42 (10): 2975-2989.   DOI: 10.11772/j.issn.1001-9081.2021081542

Event extraction is the task of extracting events that users are interested in from unstructured information and presenting them to users in a structured form. It has a wide range of applications in information collection, information retrieval, document synthesis, and question answering. From an overall perspective, event extraction algorithms can be divided into four categories: pattern matching algorithms, trigger-word methods, ontology-based algorithms, and cutting-edge joint model methods. In the research process, different evaluation methods and datasets can be used according to the related needs, and different event representation methods are also related to event extraction research. Distinguished by task type, meta-event extraction and subject event extraction are the two basic tasks of event extraction. Meta-event extraction has three kinds of methods, based on pattern matching, machine learning and neural networks respectively, while subject event extraction is performed in two ways, based on the event framework and based on ontology respectively. Event extraction research has achieved excellent results in single languages such as Chinese and English, but cross-language event extraction still faces many problems. Finally, the related work on event extraction was summarized and future research directions were prospected, in order to provide guidance for subsequent research.

Artificial bee colony algorithm based on multi-population combination strategy
Wenxia LI, Linzhong LIU, Cunjie DAI, Yu LI
Journal of Computer Applications    2021, 41 (11): 3113-3119.   DOI: 10.11772/j.issn.1001-9081.2021010064

In view of the disadvantages of the standard Artificial Bee Colony (ABC) algorithm, such as weak exploitation ability and slow convergence, a new ABC algorithm based on a multi-population combination strategy was proposed. Firstly, different-dimensional coordination and multi-dimensional matching update mechanisms were introduced into the search equation. Then, two combination strategies were designed for the employed bees and the onlooker bees respectively, each composed of two sub-strategies focusing on breadth exploration and depth exploitation. In the onlooker bee stage, the population was divided into a free subset and a non-free subset, and individuals belonging to different subsets adopted different sub-strategies to balance the exploration and exploitation abilities of the algorithm. Fifteen benchmark functions were used to compare the proposed improved ABC algorithm with the standard ABC algorithm and three other improved ABC algorithms. The results show that the proposed algorithm has better optimization performance on both low-dimensional and high-dimensional problems.
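For context, the sketch below implements the standard ABC neighborhood search that the improved search equation builds on; the multi-dimensional update option and the greedy selection on the Sphere function are illustrative, not the paper's exact mechanisms.

```python
import numpy as np

def abc_candidate(pop, i, n_dims=1, rng=None):
    """Standard ABC neighborhood search around food source i.

    v_ij = x_ij + phi * (x_ij - x_kj) for a random partner k != i and randomly
    chosen dimensions j; the classic algorithm perturbs one dimension, and
    updating several dimensions at once is a common variant.
    """
    rng = rng or np.random.default_rng()
    k = rng.choice([idx for idx in range(len(pop)) if idx != i])
    dims = rng.choice(pop.shape[1], size=n_dims, replace=False)
    v = pop[i].copy()
    phi = rng.uniform(-1.0, 1.0, size=n_dims)
    v[dims] = pop[i, dims] + phi * (pop[i, dims] - pop[k, dims])
    return v

# Greedy selection on the Sphere benchmark for one employed bee.
sphere = lambda x: float((x ** 2).sum())
pop = np.random.uniform(-5, 5, size=(20, 10))
cand = abc_candidate(pop, i=0)
if sphere(cand) < sphere(pop[0]):
    pop[0] = cand
```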

Multi-objective automatic identification and localization system in mobile cellular networks
MIAO Sheng, DONG Liang, DONG Jian'e, ZHONG Lihui
Journal of Computer Applications    2019, 39 (11): 3343-3348.   DOI: 10.11772/j.issn.1001-9081.2019040672
Aiming at the difficulty of multi-target identification and the low localization accuracy in mobile cellular networks, a multi-objective automatic identification and localization method based on the cellular network structure was presented to improve the detection of the number of targets and the localization accuracy of each target. Firstly, the existence of multiple targets was detected through analysis of the variance of repeated positioning results in the monitoring area. Secondly, cluster analysis of the positioning points was conducted by k-means unsupervised learning. As it is difficult to find an optimal cluster number for the k-means algorithm, a k-value fission algorithm based on beam resolution was proposed to determine the k value, and then the cluster centers were determined. Finally, to enhance the signal-to-noise ratio of the received signals, the beam directions were determined according to the cluster centers; then each target was positioned by the Time Difference Of Arrival (TDOA) algorithm using the signals of different beam directions received by the linearly constrained narrow-band beamformer. The simulation results show that, compared with other TDOA and Probability Hypothesis Density (PHD) filter algorithms in recent references, the presented method improves the signal-to-noise ratio of the received signals by about 10 dB, reduces the Cramér-Rao lower bound of the delay estimation error by 67%, and increases the relative positioning accuracy by more than 10 percentage points. Meanwhile, the proposed algorithm is simple and effective, each positioning is relatively independent, the time complexity is linear, and the algorithm is relatively stable.
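As a toy illustration of the detection-then-clustering step (plain k-means on synthetic position estimates; the variance threshold and fixed k stand in for the paper's beam-resolution-based fission rule), consider the following sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

# Repeated TDOA solutions in the monitoring area produce 2-D location estimates;
# a large spread of these estimates suggests more than one target is present.
estimates = np.vstack([
    np.random.normal(loc=[120.0, 80.0], scale=3.0, size=(60, 2)),   # target 1
    np.random.normal(loc=[300.0, 210.0], scale=3.0, size=(60, 2)),  # target 2
])

multi_target = estimates.var(axis=0).sum() > 50.0   # illustrative threshold

if multi_target:
    k = 2   # in the paper, k is found by the beam-resolution-based fission rule
    km = KMeans(n_clusters=k, n_init=10).fit(estimates)
    cluster_centers = km.cluster_centers_   # beams are then steered toward these centers
```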
Multi-target detection via sparse recovery of least absolute shrinkage and selection operator model
HONG Liugen, ZHENG Lin, YANG Chao
Journal of Computer Applications    2017, 37 (8): 2184-2188.   DOI: 10.11772/j.issn.1001-9081.2017.08.2184
Focusing on the issue that the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm may introduce false targets in moving target detection in the presence of multipath reflections, a dimension reduction method for the design matrix based on LASSO was proposed. Multipath propagation increases the spatial diversity and provides different Doppler shifts over different paths, while the application of a broadband OFDM signal provides frequency diversity; the introduction of spatial and frequency diversity into the system makes the target space sparse. The sparseness of the multiple paths and environment knowledge were applied to estimate the paths along which the target responses are received. Simulation results show that the improved LASSO algorithm based on the design matrix dimension reduction has better detection performance than traditional algorithms such as Basis Pursuit (BP), the Dantzig Selector (DS) and LASSO at a Signal-to-Noise Ratio (SNR) of -5 dB, and the target detection probability of the improved LASSO algorithm is 30% higher than that of LASSO at a false alarm rate of 0.1. The proposed algorithm can effectively filter out the false targets and improve the radar target detection probability.
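To illustrate the sparse-recovery formulation behind this approach (a generic LASSO example with a random design matrix, not the radar model of the paper), the following sketch recovers a few active targets from noisy linear measurements; the matrix sizes, sparsity level, and regularization weight are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sparse scene: a design matrix of delay-Doppler steering columns (random here
# for illustration) and only 3 active targets.
rng = np.random.default_rng(0)
n_meas, n_atoms = 128, 400
A = rng.standard_normal((n_meas, n_atoms)) / np.sqrt(n_meas)
x_true = np.zeros(n_atoms)
x_true[[25, 140, 333]] = [1.0, 0.8, 0.6]
y = A @ x_true + 0.05 * rng.standard_normal(n_meas)

# L1-regularized recovery; columns known from multipath/environment knowledge
# to be irrelevant could be removed from A first, i.e. the design matrix
# dimension reduction discussed above.
est = Lasso(alpha=0.01, max_iter=10000).fit(A, y)
detected = np.flatnonzero(np.abs(est.coef_) > 0.1)
```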
Hardware/Software co-design of SM2 encryption algorithm based on the embedded SoC
ZHONG Li, LIU Yan, YU Siyang, XIE Zhong
Journal of Computer Applications    2015, 35 (5): 1412-1416.   DOI: 10.11772/j.issn.1001-9081.2015.05.1412

Concerning the problems that the development cycle of system-level design of existing elliptic curve algorithms is long and the performance-overhead indicators are not clear, a Hardware/Software (HW/SW) co-design method based on Electronic System Level (ESL) modeling was proposed. This method presented several HW/SW partitions by analyzing the theory and implementation of the SM2 algorithm, and generated cycle-accurate models for the HW modules with SystemC. Module and system verification were performed to compare the execution cycle counts of the HW/SW modules and obtain the best partition. Finally, the ESL models were converted to Register Transfer Level (RTL) models according to the Control Flow Graph (CFG) and Data Flow Graph (DFG) to perform logic synthesis and comparison. Under 50 MHz and 180 nm CMOS technology, the best-performing partition executed a point multiplication in 20 ms, with 83 000 gates and a power consumption of 2.23 mW. The experimental results show that the system-level analysis is conducive to performance and resource evaluation, and has high applicability in encryption chips based on elliptic curve algorithms. An embedded SoC (System on Chip) based on this algorithm can choose an appropriate architecture according to performance and resource constraints.

High quality positron emission tomography reconstruction algorithm based on correlation coefficient and forward-and-backward diffusion
SHANG Guanhong LIU Yi ZHANG Quan GUI Zhiguo
Journal of Computer Applications    2014, 34 (5): 1482-1485.   DOI: 10.11772/j.issn.1001-9081.2014.05.1482

In Positron Emission Tomography (PET) imaging, traditional iterative algorithms suffer from detail loss and blurred object edges. A high-quality Median Prior (MP) reconstruction algorithm based on the correlation coefficient and Forward-And-Backward (FAB) diffusion was proposed to solve this problem. Firstly, a characteristic factor called the correlation coefficient was introduced to represent the local gray-level information of the image, and a new model was constructed by combining the correlation coefficient with the forward-and-backward diffusion model. Secondly, considering that the forward-and-backward diffusion model has the advantage of treating background and edges separately, the proposed model was applied to the Maximum A Posteriori (MAP) reconstruction algorithm with the median prior distribution, yielding a median prior reconstruction algorithm based on forward-and-backward diffusion. The simulation results show that the new algorithm can remove image noise while preserving object edges well, and the Signal-to-Noise Ratio (SNR) and Root Mean Squared Error (RMSE) also demonstrate the improvement of the reconstructed image quality.

Culling of foreign matter fake information in detection of subminiature accessory based on prior knowledge
ZHEN Rongjie WANG Zhong LIU Wenjing GOU Jiansong
Journal of Computer Applications    2014, 34 (5): 1458-1462.   DOI: 10.11772/j.issn.1001-9081.2014.05.1458

In the visual detection of subminiature accessories, the extracted target contour is affected by foreign matter in the field of view, such as dust and hair debris. To avoid the impact of foreign matter on measurement, a method for culling foreign matter fake information based on prior knowledge was put forward. Firstly, the corners of the component image containing foreign matter were detected. Secondly, the corner-distribution features of the standard component were obtained by statistics. Finally, the judgment condition for foreign matter fake information was derived from the corner-distribution features of the standard component to cull the fake information. Through successful application in an actual engineering project, processing experiments on three typical images with foreign matter prove that the proposed algorithm ensures the accuracy of the measurement while effectively culling the foreign matter fake information in the images.

Encryption scheme of certificateless and leakage-resilient private key
YU Qihong LI Jiguo
Journal of Computer Applications    2014, 34 (5): 1292-1295.   DOI: 10.11772/j.issn.1001-9081.2014.05.1292

Many side channel attacks and cold boot attacks can leak secret information of cryptographic systems and destroy the security of traditional cryptographic schemes. This paper presented a certificateless encryption scheme that can resist private key leakage. The security of the scheme was proved based on the q-ABDHE (q-Augmented Bilinear Diffie-Hellman Exponent) assumption, and the leakage resilience was obtained via an extractor. The leakage-resilient performance was analyzed, and the theoretical analysis shows that the relative leakage rate of the private key can reach 1/8.

Extension of contradiction problem-oriented description logic SHOQ
WANG Jing WANG Hong LI Jian FAN Hongjie
Journal of Computer Applications    2014, 34 (4): 1139-1143.   DOI: 10.11772/j.issn.1001-9081.2014.04.1139

In order to apply the reasoning rules of description logic to analyze and solve simple contradiction problems, the extension set was introduced as the set-theoretic foundation of the description logic SHOQ, and a new description logic named D-SHOQES (Dynamic Description Logic SHOQ Based on Extension Set) was proposed. The cut sets of extension concepts and extension roles were defined as atomic concepts and atomic roles, and action theory was incorporated to obtain the qualitative-change domain and quantitative-change domain of the concepts and roles. The semantics of concepts, roles and actions in D-SHOQES were given, as well as the Tableau-algorithm reasoning rules. Finally, the method of solving contradiction problems was studied, which offers a strategy for the solution of contradiction problems.

Three-queue job scheduling algorithm based on Hadoop
ZHU Jie ZHAO Hong LI Wenrui
Journal of Computer Applications    2014, 34 (11): 3227-3230.   DOI: 10.11772/j.issn.1001-9081.2014.11.3227

The single-queue job scheduling algorithm in a homogeneous Hadoop cluster causes short jobs to wait and leads to low resource utilization; multi-queue scheduling algorithms solve the problems of unfairness and low execution efficiency, but most of them need parameters to be set manually, compete with each other for resources, and are more complex. To resolve these problems, a three-queue scheduling algorithm was proposed. The algorithm used job classification, dynamic priority adjustment, a shared resource pool and job preemption to realize fairness, simplify the scheduling flow of normal jobs and improve concurrency. Comparison experiments with the First In First Out (FIFO) algorithm were conducted under three kinds of situations: a high percentage of short jobs, similar percentages of all types of jobs, and mainly normal jobs with occasional long and short jobs. The proposed algorithm reduced the running time of jobs. The experimental results show that the increase in execution efficiency of the proposed algorithm is not obvious when short jobs dominate; however, when all types of jobs are balanced, the performance improvement is remarkable. This is consistent with the algorithm design rules: prioritizing short jobs, simplifying the scheduling flow of normal jobs and taking long jobs into account, which improves the scheduling performance.

Simple efficient bit-flipping decoding algorithm for low density parity check code
ZHANG Gaoyuan WEN Hong LI Tengfei SONG Huanhuan
Journal of Computer Applications    2014, 34 (10): 2796-2799.   DOI: 10.11772/j.issn.1001-9081.2014.10.2796

To improve the efficiency of Bit Flipping (BF) decoding, a weighted gradient descent bit-flipping decoding algorithm based on the average magnitude was proposed for Low Density Parity Check (LDPC) codes. The average magnitude of the information nodes was first introduced as the reliability of the parity checks and used to weight the bipolar syndrome, yielding an effective bit-flipping function. Simulation was conducted at a Bit-Error Rate (BER) of 10^(-5) over an Additive White Gaussian Noise (AWGN) channel, and coding gains of 0.08 dB and 0.29 dB were achieved in comparison with the conventional weighted Gradient Descent Bit-Flipping (GDBF) and Reliability Ratio based Weighted Gradient Descent Bit-Flipping (RRWGDBF) algorithms, while the average number of decoding iterations was reduced by 72.6% and 9.3%, respectively. The simulation results show that the improved algorithm outperforms the conventional algorithms while also reducing the average number of decoding iterations, indicating that the new scheme better balances error-correcting ability, decoding complexity and delay, and can be applied to high-speed communication systems with strict real-time requirements.
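For background, the sketch below shows plain hard-decision bit flipping, the baseline that weighted GDBF-style flipping functions refine with soft channel and reliability information; the toy (7,4) Hamming parity-check matrix is used only for illustration and is not an LDPC code.

```python
import numpy as np

def bit_flip_decode(H: np.ndarray, r: np.ndarray, max_iter: int = 50) -> np.ndarray:
    """Plain hard-decision bit-flipping decoding.

    H: parity-check matrix (m x n, entries 0/1); r: received hard bits (length n).
    Each iteration flips the bit that participates in the most failed checks;
    weighted GDBF decoders replace this counting rule with a soft flipping
    function built from channel values and check reliabilities.
    """
    x = r.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            break                          # all parity checks satisfied
        failed_per_bit = syndrome @ H      # failed checks touching each bit
        x[np.argmax(failed_per_bit)] ^= 1  # flip the most suspicious bit
    return x

# Toy example: (7,4) Hamming code, all-zero codeword with a single bit error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1
print(bit_flip_decode(H, received))        # recovers the all-zero codeword
```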
